-
We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result showing that BPL = L “on average”, and a similar result showing that BPSPACE[O(n)] = DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with minimal memory footprint (i.e., showing BPSPACE[S] ⊆ DSPACE[c·S] for a small constant c), by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties specific to space-bounded machines.
-
The Gilbert-Varshamov (GV) bound is a classical existential result in coding theory. It implies that a random linear binary code of rate ε² has relative distance at least 1/2 − O(ε) with high probability. However, it is a major challenge to construct explicit codes with similar parameters. One hope to derandomize the Gilbert-Varshamov construction is code concatenation: we begin with a (hopefully explicit) outer code 𝒞_out over a large alphabet, and concatenate it with a small binary random linear code 𝒞_in. It is known that when we use independent small codes for each coordinate, the result lies on the GV bound with high probability, but this still uses a lot of randomness. In this paper, we consider the question of whether code concatenation with a single random linear inner code 𝒞_in can lie on the GV bound, and if so, what conditions on 𝒞_out are sufficient for this. We show, first, that there do exist linear outer codes 𝒞_out that are "good" for concatenation in this sense (in fact, most linear codes are good). We also provide two sufficient conditions for 𝒞_out, so that if 𝒞_out satisfies them, 𝒞_out ∘ 𝒞_in will likely lie on the GV bound. We hope that these conditions may inspire future work towards constructing explicit codes 𝒞_out.
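As a toy illustration of the GV phenomenon the abstract opens with (not the paper's concatenation construction), the sketch below samples a random binary linear code of rate roughly ε² and computes its relative distance by exhaustive search. All parameter values and function names here are illustrative choices, not taken from the paper.

```python
# Minimal sketch: a random k x n binary linear code with k ~ eps^2 * n typically has
# relative distance well above 1/2 - O(eps). Parameters are toy-sized so that brute-force
# enumeration of all 2^k codewords is feasible.
import itertools
import random

def random_linear_code(k, n, rng):
    """Sample a uniformly random k x n generator matrix over GF(2)."""
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]

def encode(G, msg):
    """Encode a k-bit message with generator matrix G (arithmetic over GF(2))."""
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]  # column-wise inner products

def relative_distance(G):
    """Minimum Hamming weight over all nonzero codewords, divided by n."""
    k, n = len(G), len(G[0])
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            best = min(best, sum(encode(G, msg)))
    return best / n

rng = random.Random(0)
eps, n = 0.35, 40
k = max(1, round(eps ** 2 * n))   # rate roughly eps^2
G = random_linear_code(k, n, rng)
print(f"rate ~ {k / n:.2f}, relative distance = {relative_distance(G):.2f} "
      f"(GV-style prediction: at least ~ {0.5 - eps:.2f})")
```

Concatenation itself would additionally encode each symbol of an outer codeword from 𝒞_out with the sampled inner code; that step is omitted to keep the demo exhaustive-search-sized.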
-
A Chor–Goldreich (CG) source is a sequence of random variables X = X_1 ∘ … ∘ X_t, where each X_i ∼ {0,1}^d and X_i has δd min-entropy conditioned on any fixing of X_1 ∘ … ∘ X_{i−1}. The parameter 0 < δ ≤ 1 is the entropy rate of the source. We typically think of d as constant and t as growing. We extend this notion in several ways, defining almost CG sources. Most notably, we allow each X_i to only have conditional Shannon entropy δd. We achieve pseudorandomness results for almost CG sources which were not known to hold even for standard CG sources, and even for the weaker model of Santha–Vazirani sources: We construct a deterministic condenser that on input X, outputs a distribution which is close to having constant entropy gap, namely a distribution Z ∼ {0,1}^m for m ≈ δdt with min-entropy m − O(1). Therefore, we can simulate any randomized algorithm with small failure probability using almost CG sources with no multiplicative slowdown. This result extends to randomized protocols as well, and any setting in which we cannot simply cycle over all seeds, and a “one-shot” simulation is needed. Moreover, our construction works in an online manner, since it is based on random walks on expanders. Our main technical contribution is a novel analysis of random walks, which should be of independent interest. We analyze walks with adversarially correlated steps, each step being entropy-deficient, on good enough lossless expanders. We prove that such walks (or certain interleaved walks on two expanders), starting from a fixed vertex and walking according to X_1 ∘ … ∘ X_t, accumulate most of the entropy in X.
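A minimal simulation of the random-walk idea, under simplifying assumptions of my own: a random regular multigraph stands in for a lossless expander, and each step is entropy-deficient but sampled independently rather than adversarially correlated. The demo only reports the empirical entropy of the walk's endpoint; it is not the paper's condenser.

```python
# Sketch: drive a walk on a d-regular graph using entropy-deficient d-ary steps
# (a stand-in for the blocks X_1,...,X_t of a CG-like source) and measure how much
# entropy accumulates at the endpoint.
import math
import random
from collections import Counter

def random_regular_multigraph(n, d, rng):
    """Rotation map of a random d-regular multigraph on n vertices (stand-in expander)."""
    nbrs = [[] for _ in range(n)]
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        for v in range(n):
            nbrs[v].append(perm[v])
    return nbrs

def biased_step(d, delta, rng):
    """One step with only ~delta*log2(d) bits of entropy: restrict to a few edge labels."""
    support = max(2, int(d ** delta))
    return rng.randrange(support)

def endpoint_entropy(n=64, d=16, t=12, delta=0.5, trials=20000, seed=1):
    rng = random.Random(seed)
    nbrs = random_regular_multigraph(n, d, rng)
    counts = Counter()
    for _ in range(trials):
        v = 0                              # fixed start vertex, as in the abstract
        for _ in range(t):
            v = nbrs[v][biased_step(d, delta, rng)]
        counts[v] += 1
    probs = [c / trials for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

print(f"empirical endpoint entropy ≈ {endpoint_entropy():.2f} bits (log2 of 64 vertices = 6)")
```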
-
Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α)·log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1−α′)n}, where α′ = O(α). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes.
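The t^{2+α} figure comes from the standard seed-enumeration argument: with a generator of seed length (1+α)·log s, a deterministic simulation runs the algorithm on every one of the s^{1+α} seeds and takes a majority vote. The sketch below shows that generic recipe with a placeholder generator (a hash-based stretcher chosen only to fix the interface); it is not the generator constructed in the paper, and the toy algorithm and parameters are mine.

```python
# Sketch of PRG-based derandomization by cycling over all seeds and taking a majority vote.
import hashlib

def prg(seed: int, out_bits: int) -> str:
    """Placeholder pseudorandom generator: stretch an integer seed to out_bits bits."""
    out, counter = "", 0
    while len(out) < out_bits:
        h = hashlib.sha256(f"{seed}:{counter}".encode()).hexdigest()
        out += bin(int(h, 16))[2:].zfill(256)
        counter += 1
    return out[:out_bits]

def derandomize(randomized_alg, x, rand_bits: int, seed_len: int) -> bool:
    """Run randomized_alg(x, r) on prg(seed) for every seed and return the majority answer."""
    votes = sum(randomized_alg(x, prg(seed, rand_bits))
                for seed in range(2 ** seed_len))
    return votes * 2 > 2 ** seed_len

# Toy randomized "algorithm": decide whether x contains an odd digit, erring exactly
# when the two random bits it inspects are "11" (error probability 1/4 < 1/2).
def toy_alg(x: str, r: str) -> bool:
    truth = any(int(c) % 2 for c in x)
    unlucky = r[:2] == "11"
    return truth != unlucky

print(derandomize(toy_alg, "2468", rand_bits=16, seed_len=6))   # expected: False
print(derandomize(toy_alg, "2461", rand_bits=16, seed_len=6))   # expected: True
```

With 2^seed_len seeds and per-run cost roughly t, the deterministic simulation costs about t · 2^seed_len; taking seed_len = (1+α)·log s with s ≈ t gives the t^{2+α} bound quoted above.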
-
There are only a few known general approaches for constructing explicit pseudorandom generators (PRGs). The "iterated restrictions" approach, pioneered by Ajtai and Wigderson [Ajtai and Wigderson, 1989], has provided PRGs with seed length polylog n or even Õ(log n) for several restricted models of computation. Can this approach ever achieve the optimal seed length of O(log n)? In this work, we answer this question in the affirmative. Using the iterated restrictions approach, we construct an explicit PRG for read-once depth-2 AC⁰[⊕] formulas with seed length O(log n) + Õ(log(1/ε)). In particular, we achieve optimal seed length O(log n) with near-optimal error ε = exp(−Ω̃(log n)). Even for constant error, the best prior PRG for this model (which includes read-once CNFs and read-once 𝔽₂-polynomials) has seed length Θ(log n ⋅ (log log n)²) [Chin Ho Lee, 2019]. A key step in the analysis of our PRG is a tail bound for subset-wise symmetric polynomials, a generalization of elementary symmetric polynomials. Like elementary symmetric polynomials, subset-wise symmetric polynomials provide a way to organize the expansion of ∏_{i=1}^m (1 + y_i). Elementary symmetric polynomials simply organize the terms by degree, i.e., they keep track of the number of variables participating in each monomial. Subset-wise symmetric polynomials keep track of more data: for a fixed partition of [m], they keep track of the number of variables from each subset participating in each monomial. Our tail bound extends prior work by Gopalan and Yehudayoff [Gopalan and Yehudayoff, 2014] on elementary symmetric polynomials.
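A small numerical check of the identity behind this organization, written from the description in the abstract: for a fixed partition of [m] into blocks S_1, …, S_k, summing ∏_j e_{d_j}(y restricted to S_j) over all degree vectors (d_1, …, d_k) recovers ∏_{i=1}^m (1 + y_i). The variable values and the partition below are arbitrary choices for the demo.

```python
# Sketch: subset-wise symmetric polynomials refine the expansion of prod_i (1 + y_i)
# by tracking, per block of a fixed partition, how many variables enter each monomial.
from itertools import combinations, product
from math import prod

def elementary_symmetric(vals, d):
    """e_d(vals): sum over all d-subsets of vals of the product of their entries."""
    return sum(prod(c) for c in combinations(vals, d)) if d else 1

def subset_wise_expansion(ys, partition):
    """Sum of all subset-wise symmetric terms for the given partition of the indices."""
    blocks = [[ys[i] for i in block] for block in partition]
    total = 0
    for degrees in product(*(range(len(b) + 1) for b in blocks)):
        total += prod(elementary_symmetric(b, d) for b, d in zip(blocks, degrees))
    return total

ys = [0.3, -1.2, 0.5, 2.0, -0.7]
partition = [[0, 2], [1, 3, 4]]          # a fixed partition of {0, ..., 4}
lhs = prod(1 + y for y in ys)
rhs = subset_wise_expansion(ys, partition)
print(f"prod(1+y_i) = {lhs:.6f}, subset-wise expansion = {rhs:.6f}")
```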